
    A Reconfigurable High-Performance Optical Data Center Architecture

    Optical data center network architectures are becoming attractive because of their low energy consumption, large bandwidth, and low cabling complexity. In \cite{Xu1605:PODCA}, an AWGR-based passive optical data center architecture (PODCA) is presented. Compared with other optical data center architectures, e.g., DOS \cite{ye2010scalable}, Proteus \cite{singla2010proteus}, and Petabit \cite{xia2010petabit}, PODCA can save up to 90% on power consumption and 88% in cost. Also, average latency can be as low as 9 μs at close to 100% throughput. However, PODCA is not reconfigurable and cannot optimize the network topology for dynamic traffic. In this paper, we present a novel, scalable, and flexible reconfigurable architecture called RODCA. RODCA builds on and augments PODCA with a flexible localized intra-cluster optical network. With the reconfigurable intra-cluster network, racks with mutually large traffic can be located within the same cluster and share the large bandwidth of the intra-cluster network. We give an algorithm for DCN topology reconfiguration and present simulation results that demonstrate the effectiveness of reconfiguration.
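
    The abstract does not spell out the reconfiguration algorithm, so the following is only a minimal sketch of the underlying idea (grouping racks with mutually large traffic into the same cluster), written as a greedy heuristic in Python. The function name, the traffic-matrix representation, and the capacity parameters are all hypothetical, not taken from the paper.

```python
# Hypothetical greedy sketch of traffic-aware rack clustering; not the
# paper's reconfiguration algorithm. Assumes n <= num_clusters * cluster_size.
import itertools

def cluster_racks(traffic, num_clusters, cluster_size):
    """traffic[i][j]: symmetric traffic volume between racks i and j."""
    n = len(traffic)
    # consider rack pairs in order of decreasing mutual traffic
    pairs = sorted(itertools.combinations(range(n), 2),
                   key=lambda p: traffic[p[0]][p[1]], reverse=True)
    cluster_of, load = {}, [0] * num_clusters
    for i, j in pairs:
        ci, cj = cluster_of.get(i), cluster_of.get(j)
        if ci is None and cj is None:
            c = min(range(num_clusters), key=load.__getitem__)
            if load[c] + 2 <= cluster_size:       # co-locate the pair
                cluster_of[i] = cluster_of[j] = c
                load[c] += 2
        elif ci is None and cj is not None and load[cj] < cluster_size:
            cluster_of[i] = cj                    # join partner's cluster
            load[cj] += 1
        elif cj is None and ci is not None and load[ci] < cluster_size:
            cluster_of[j] = ci
            load[ci] += 1
    for r in range(n):                            # place any leftover racks
        if r not in cluster_of:
            c = min(range(num_clusters), key=load.__getitem__)
            cluster_of[r] = c
            load[c] += 1
    return cluster_of

tm = [[0, 9, 1, 1], [9, 0, 1, 1], [1, 1, 0, 8], [1, 1, 8, 0]]
print(cluster_racks(tm, num_clusters=2, cluster_size=2))  # {0: 0, 1: 0, 2: 1, 3: 1}
```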

    Impact of Wavelength and Modulation Conversion on Translucent Elastic Optical Networks Using MILP

    Compared to legacy wavelength division multiplexing networks, elastic optical networks (EONs) add flexibility to network deployment and management. EONs can include previously available technology, such as signal regeneration and wavelength conversion, as well as new features such as finer-granularity spectrum assignment and modulation conversion. Yet each added feature adds to the cost of the network. In order to quantify the potential benefit of each technology, we present a link-based mixed-integer linear programming (MILP) formulation to solve the optimal resource allocation problem. We then propose a recursive model to either augment existing network deployments or speed up the resource allocation computation for larger networks with higher traffic demands than can be solved directly with the MILP. We show through simulation that systems equipped with signal regenerators or wavelength converters require a notably smaller total bandwidth, depending on the topology of the network. We also show that the suboptimal recursive solution speeds up the calculation and makes the running time more predictable compared to the optimal MILP.
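
    To give a concrete, if toy-sized, picture of what a link-based MILP looks like, here is a sketch using the PuLP modeling library: binary variables select the links a demand traverses, flow conservation forces a source-to-sink route, and the objective minimizes total slots consumed. The paper's actual formulation additionally models regenerators, wavelength and modulation conversion, and spectrum assignment; none of that is captured here, and all data below is invented.

```python
# Toy link-based routing MILP (my illustration, not the paper's formulation).
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

nodes = ["A", "B", "C"]
links = [("A", "B"), ("B", "C"), ("A", "C")]   # directed fiber links
demands = {("A", "C"): 3}                      # (source, sink) -> slots needed

prob = LpProblem("toy_link_based_allocation", LpMinimize)
x = {(d, e): LpVariable(f"x_{d[0]}{d[1]}_{e[0]}{e[1]}", cat=LpBinary)
     for d in demands for e in links}

# objective: total spectrum-slot-links consumed across the network
prob += lpSum(demands[d] * x[d, e] for d in demands for e in links)

# flow conservation: each demand routes one unit from its source to its sink
for d in demands:
    src, dst = d
    for v in nodes:
        outflow = lpSum(x[d, e] for e in links if e[0] == v)
        inflow = lpSum(x[d, e] for e in links if e[1] == v)
        prob += outflow - inflow == (1 if v == src else -1 if v == dst else 0)

prob.solve()
print({e: x[d, e].value() for d in demands for e in links})
```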

    On the Optimality of Scheduling Dependent MapReduce Tasks on Heterogeneous Machines

    MapReduce is the most popular big-data computation framework, motivating many research topics. A MapReduce job consists of two successive phases, i.e., a map phase and a reduce phase, and each phase can be divided into multiple tasks. A reduce task can only start when all the map tasks finish processing. A job is successfully completed when all its map and reduce tasks are complete. Optimally scheduling these tasks on different servers to minimize the weighted completion time is an open problem, and is the focus of this paper. We give an approximation algorithm with a competitive ratio of 2(1 + (m - 1)/D) + 1, where m is the number of servers and D ≥ 1 is the task-skewness product. We implement the proposed algorithm on the Hadoop framework and compare it with three baseline schedulers. Results show that our DMRS algorithm can outperform the baseline schedulers by up to 82%.
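
    To make the bound concrete (numbers chosen purely for illustration), plugging in m = 10 servers and task-skewness product D = 3 gives

$$ 2\left(1 + \frac{m-1}{D}\right) + 1 = 2\left(1 + \frac{10-1}{3}\right) + 1 = 9, $$

    and the guarantee tightens toward 3 as D grows large.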

    Chronos: A Unifying Optimization Framework for Speculative Execution of Deadline-critical MapReduce Jobs

    Meeting desired application deadlines in cloud processing systems such as MapReduce is crucial, as the nature of cloud applications is becoming increasingly mission-critical and deadline-sensitive. It has been shown that the execution times of MapReduce jobs are often adversely impacted by a few slow tasks, known as stragglers, which result in high latency and deadline violations. While a number of strategies have been developed in existing work to mitigate stragglers by launching speculative or clone task attempts, none of them provides a quantitative framework that optimizes the speculative execution for offering guaranteed Service Level Agreements (SLAs) to meet application deadlines. In this paper, we bring several speculative scheduling strategies together under a unifying optimization framework, called Chronos, which defines a new metric, Probability of Completion before Deadlines (PoCD), to measure the probability that MapReduce jobs meet their desired deadlines. We systematically analyze PoCD for popular strategies including Clone, Speculative-Restart, and Speculative-Resume, and quantify their PoCD in closed form. The result illuminates an important tradeoff between PoCD and the cost of speculative execution, measured by the total (virtual) machine time required under different strategies. We propose an optimization problem to jointly optimize PoCD and execution cost under the different strategies, and develop an algorithmic solution that is guaranteed to be optimal. Chronos is prototyped on Hadoop MapReduce and evaluated against three baseline strategies using both experiments and trace-driven simulations, achieving a 50% net utility increase with up to 80% PoCD and 88% cost improvements.
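
    PoCD lends itself to a quick Monte Carlo estimate. The sketch below is not Chronos's closed-form analysis, only a numerical illustration of the metric; the exponential attempt times, independent tasks, and the Clone-style strategy parameters are assumptions of mine.

```python
# Monte Carlo illustration of PoCD under a Clone-style strategy; the
# exponential task times and parameters are assumptions, not Chronos's model.
import random

def pocd_clone(num_tasks, clones, mean_time, deadline, trials=50_000):
    """Estimate P(job finishes before deadline) when every task is launched
    `clones` times and completes as soon as its fastest attempt does."""
    hits = 0
    for _ in range(trials):
        job_end = max(
            min(random.expovariate(1 / mean_time) for _ in range(clones))
            for _ in range(num_tasks)
        )
        hits += job_end <= deadline
    return hits / trials

# more clones -> higher PoCD, at the cost of extra (virtual) machine time
print(pocd_clone(num_tasks=20, clones=1, mean_time=1.0, deadline=3.0))
print(pocd_clone(num_tasks=20, clones=2, mean_time=1.0, deadline=3.0))
```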

    Trading Off Computation with Transmission in Status Update Systems

    This paper is motivated by emerging edge computing applications in which generated data are pre-processed at the source and then transmitted to an edge server. In such a scenario, there is typically a tradeoff between the amount of pre-processing and the amount of data to be transmitted. We model such a system as two non-preemptive queues in tandem whose service times are independent over time, but whose mean transmission service time depends on the mean computation service time. The first queue is of M/GI/1/1 form, with a single server, memoryless (Poisson) arrivals, general independent service, and no extra buffer to save incoming status update packets. The second queue is of GI/M/1/2* form, with a single server receiving packets from the first queue, memoryless service, and a single data buffer to save incoming packets. Additionally, the mean service times of the first and second queues are related through a deterministic monotonic function. We perform a stationary distribution analysis of this system and obtain closed-form expressions for the average age of information (AoI) and the average peak AoI. Our numerical results illustrate the analytical findings and highlight the tradeoff between average AoI and average peak AoI generated by the tandem nature of the queueing system with dependent service times.
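
    As a numerical companion to the analysis, the following Monte Carlo sketch simulates a simplified version of the tandem: exponential first-stage service instead of general independent service, no coupling function between the two means, and exponential second-stage service with the single replace-on-arrival buffer. All names and parameters are illustrative, not the paper's.

```python
# Simplified tandem-queue AoI simulation (my sketch, not the paper's analysis).
import random

def sim_tandem_aoi(lam, mean_s1, mean_s2, n_packets=100_000):
    t = 0.0          # global clock, advanced packet by packet
    free2 = 0.0      # time at which server 2 next becomes idle
    waiting = None   # generation time of the packet in queue 2's buffer
    deliveries = []  # (generation_time, delivery_time) at the monitor
    for _ in range(n_packets):
        t += random.expovariate(lam)            # accepted fresh arrival
        gen = t
        t += random.expovariate(1 / mean_s1)    # stage-1 (computation) service
        # the packet reaches queue 2 at time t
        if waiting is not None and free2 <= t:  # buffered packet got served
            s2 = random.expovariate(1 / mean_s2)
            deliveries.append((waiting, free2 + s2))
            free2 += s2
            waiting = None
        if t >= free2:                          # server 2 idle: serve now
            s2 = random.expovariate(1 / mean_s2)
            deliveries.append((gen, t + s2))
            free2 = t + s2
        else:                                   # busy: replace buffered packet
            waiting = gen
    # integrate the sawtooth age curve between successive deliveries
    area, last_t, last_gen = 0.0, deliveries[0][1], deliveries[0][0]
    for gen, d in deliveries[1:]:
        a0, a1 = last_t - last_gen, d - last_gen
        area += (a0 + a1) / 2 * (d - last_t)
        last_t, last_gen = d, gen
    return area / (last_t - deliveries[0][1])

print(sim_tandem_aoi(lam=1.0, mean_s1=0.5, mean_s2=0.5))
```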

    On Age and Value of Information in Status Update Systems

    Motivated by the inherent value of packets arising in many cyber-physical applications (e.g., due to the precision of the information content or an alarm message), we consider status update systems with update packets carrying values as well as their generation time stamps. Once generated, a status update packet has a random initial value and a deterministic deadline after which it is no longer useful (ultimate staleness). In our model, the value of a packet decreases over time (even after reception), starting from its generation until ultimate staleness, when it vanishes. The value of information (VoI) at the receiver is additive, in that the VoI is the sum of the current values of all packets held by the receiver. We investigate various queueing disciplines under potential dependence between value and service time and provide closed-form expressions for the average VoI at the receiver. Numerical results illustrate the average VoI for different scenarios and the contrast between the average age of information (AoI) and the average VoI.
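
    A toy reading of the additive-VoI definition, with a linear decay profile (the paper only says value decreases from its initial level to zero at ultimate staleness; linearity is my assumption, as are all names and numbers):

```python
# Toy additive-VoI evaluation with an assumed linear decay profile.
def voi(packets, t):
    """packets: (generation_time, initial_value, deadline) held by the receiver."""
    total = 0.0
    for gen, v0, ttl in packets:
        age = t - gen
        if 0 <= age < ttl:
            total += v0 * (1 - age / ttl)   # linear decay to ultimate staleness
    return total

print(voi([(0.0, 4.0, 10.0), (3.0, 2.0, 5.0)], t=6.0))  # 1.6 + 0.8 = 2.4
```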

    Optimizing Information Freshness Through Computation-Transmission Tradeoff and Queue Management in Edge Computing

    Edge computing applications typically require generated data to be preprocessed at the source and then transmitted to an edge server. In such cases, transmission time and preprocessing time are coupled, yielding a tradeoff between them in achieving the targeted objective. This paper presents an analysis of such a system with the objective of optimizing the freshness of received data at the edge server. We model this system as two queues in tandem whose service times are independent over time, but whose mean transmission service time is monotonically dependent on the mean computation service time. This dependence captures the natural decrease in transmission time due to lower offloaded computation. We analyze various queue management schemes in this tandem queue, where the first queue has a single server, Poisson packet arrivals, general independent service, and no extra buffer to save incoming status update packets. The second queue has a single server receiving packets from the first queue, and its service is memoryless. We consider the second queue in two forms: (i) no data buffer, and (ii) a one-unit data buffer with last-come-first-served discarding. We analyze various non-preemptive as well as preemptive cases. We perform a stationary distribution analysis and obtain closed-form expressions for the average age of information (AoI) and the average peak AoI. Our numerical results illustrate the analytical findings on how computation and transmission times can be traded off to optimize AoI, and reveal a consequent tradeoff between average AoI and average peak AoI.
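
    The coupling can be made tangible with a crude numerical toy: assume preprocessing for c seconds shrinks the payload so the mean transmission time falls as t0/(1 + k*c), and score each split with a rough freshness proxy (mean update gap plus mean per-update delay). This is not the paper's closed-form AoI analysis; the decay law and the proxy are my assumptions, and the toy only shows why an interior optimum appears.

```python
# Crude toy of the computation-transmission coupling; the decay law and the
# freshness proxy below are assumptions, not the paper's closed forms.
def aoi_proxy(lam, c, t0=1.0, k=2.0):
    transmit = t0 / (1 + k * c)   # more offloaded computation, smaller payload
    service = c + transmit        # the two tandem stages traversed in series
    return 1 / lam + service      # mean update gap + mean per-update delay

candidates = [c / 100 for c in range(201)]        # computation times 0..2 s
best_c = min(candidates, key=lambda c: aoi_proxy(lam=1.0, c=c))
print(f"proxy-minimizing computation time ~ {best_c:.2f} s")  # interior optimum
```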

    Relative Age of Information: A New Metric for Status Update Systems

    In this paper, we introduce a new data freshness metric, relative Age of Information (rAoI), and examine it in a single-server system with various packet management schemes. The (classical) AoI metric was introduced to measure the staleness of status updates at the receiving end with respect to their generation at the source. That metric addresses systems where the timings of update generation at the source are absolute and can be designed separately or jointly with the transmission schedules. In many decentralized applications, transmission schedules are blind to update generation timing, and the transmitter learns the timing of an update packet only after it arrives. As such, an update becomes stale after a new one arrives. The rAoI metric measures how fresh the data at the receiver is with respect to the data at the transmitter. It introduces a particularly explicit dependence on the arrival process in the evaluation of age. We investigate several queueing disciplines and provide closed-form expressions for rAoI together with numerical comparisons.
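
    Since both the transmitter's and the receiver's freshest generation times are step functions, rAoI as described is piecewise constant and easy to average over a trace. The sketch below encodes that reading of the metric; the paper's formal definition may differ, and the function name and inputs are hypothetical.

```python
# Sketch of time-averaging a piecewise-constant rAoI trace (my reading of
# the metric; names and inputs are hypothetical).
def avg_relative_age(tx_gens, deliveries, horizon):
    """tx_gens: times fresh packets appear at the transmitter (sorted).
    deliveries: (delivery_time, generation_time) pairs at the receiver.
    rAoI(t) = freshest generation at the Tx - freshest delivered at the Rx.
    """
    events = ([(t, "tx", t) for t in tx_gens]
              + [(d, "rx", g) for d, g in deliveries])
    events.sort()
    g_tx = g_rx = 0.0
    area, last = 0.0, 0.0
    for t, kind, g in events:
        area += (g_tx - g_rx) * (t - last)   # rAoI is constant between events
        last = t
        if kind == "tx":
            g_tx = max(g_tx, g)
        else:
            g_rx = max(g_rx, g)
    area += (g_tx - g_rx) * (horizon - last)
    return area / horizon

print(avg_relative_age([1.0, 2.0, 4.0], [(2.5, 1.0), (5.0, 4.0)], 6.0))  # ~1.083
```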

    Waiting before Serving: A Companion to Packet Management in Status Update Systems

    In this paper, we explore the potential of the server waiting before packet transmission to improve the Age of Information (AoI) in status update systems. We consider a non-preemptive queue with Poisson arrivals and an independent general service distribution, and we incorporate waiting before serving into two packet management schemes: M/GI/1/1 and M/GI/1/2*. In the M/GI/1/1 scheme, the server waits for a deterministic time immediately after a packet enters the server. In the M/GI/1/2* scheme, depending on whether the system is idle or busy, the server waits for a deterministic time before starting service of the packet. In both cases, if a newer arrival is captured during the wait, the existing packet is discarded. Different from most existing works, we analyze the AoI evolution by indexing the incoming packets, enabled by an alternative method of partitioning the area under the instantaneous AoI curve to compute its time average. We obtain expressions for the average AoI and the average peak AoI for both queueing disciplines with waiting. Our numerical results demonstrate that waiting before service can bring a significant improvement in average age, particularly for heavy-tailed service distributions. This improvement comes at the expense of an increase in average peak AoI. We highlight the tradeoff between average AoI and average peak AoI generated by waiting before serving.
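
    The waiting scheme for the M/GI/1/1 case can be mimicked with a short Monte Carlo: when a packet enters the idle server, hold it for a deterministic window, let any fresher arrival captured in the window replace it, then serve. Pareto service stands in for the heavy-tailed example; this is a simplification of the paper's model under my assumptions, not its analysis, and all parameters are illustrative.

```python
# Monte Carlo sketch of the M/GI/1/1 scheme with waiting (my simplification);
# Pareto service is the heavy-tailed stand-in.
import random

def avg_aoi_with_wait(lam, wait, pareto_alpha=1.5, n=100_000):
    t, deliveries = 0.0, []
    for _ in range(n):
        t += random.expovariate(lam)   # packet admitted to the idle server
        gen = t
        window_end = t + wait          # server holds the packet for `wait`
        nxt = t + random.expovariate(lam)
        while nxt < window_end:        # fresher arrival replaces the held packet
            gen = nxt
            nxt += random.expovariate(lam)
        t = window_end + random.paretovariate(pareto_alpha)  # serve, then deliver
        deliveries.append((gen, t))
    # integrate the sawtooth age curve between successive deliveries
    area, last_t, last_gen = 0.0, deliveries[0][1], deliveries[0][0]
    for gen, d in deliveries[1:]:
        a0, a1 = last_t - last_gen, d - last_gen
        area += (a0 + a1) / 2 * (d - last_t)
        last_t, last_gen = d, gen
    return area / (last_t - deliveries[0][1])

# compare no waiting against a deterministic wait before service
print(avg_aoi_with_wait(lam=1.0, wait=0.0))
print(avg_aoi_with_wait(lam=1.0, wait=0.5))
```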

    On double-link failure recovery in WDM optical networks

    Network survivability is a crucial requirement in high-speed optical networks. Typical approaches to providing survivability have considered the failure of a single component such as a link or a node. In this paper, we consider a failure model in which any two links in the network may fail in an arbitrary order. Three loopback methods of recovering from double-link failures are presented. The first two methods require the identification of the failed links, while the third one does not. However, precomputing the backup paths for the third method is more difficult than for the first two. A heuristic algorithm that precomputes backup paths for links is presented. Numerical results comparing the performance of our algorithm with other approaches suggest that it is possible to achieve 100% recovery from double-link failures with a modest increase in backup capacity. Index Terms: Wavelength division multiplexing (WDM), loopback recovery, restoration, double-link failures, 3-edge-connected graph.
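
    The 3-edge-connectivity notion in the index terms has a direct brute-force interpretation: a network can survive every double-link failure only if it stays connected after removing any pair of links. A small check of that property using the networkx library (mine, not the paper's heuristic, and far less efficient than precomputed backup paths):

```python
# Brute-force double-link survivability check, i.e. 3-edge-connectivity.
import itertools
import networkx as nx

def survives_all_double_failures(G):
    for e1, e2 in itertools.combinations(G.edges(), 2):
        H = G.copy()
        H.remove_edges_from([e1, e2])
        if not nx.is_connected(H):
            return False
    return True

print(survives_all_double_failures(nx.cycle_graph(5)))     # False: a ring splits
print(survives_all_double_failures(nx.complete_graph(5)))  # True: 4-edge-connected
```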